Machine learning phases in statistical physics
Conventionally, the study of phases in statistical mechanics is performed with the help of random sampling tools. Among the most powerful are Monte Carlo simulations, consisting of stochastic importance sampling over state space and the evaluation of estimators for physical quantities. The ability of modern machine learning techniques to classify, identify, or interpret massive data sets provides a complementary paradigm for analyzing the exponentially large number of states in statistical physics. In this report, it is demonstrated by application to Ising-type models that deep learning has potentially wide applications in solving many-body statistical physics problems. In the application of supervised learning, we show that a feed-forward neural network can identify phases and phase transitions in the ferromagnetic Ising model, and that a convolutional neural network (CNN) is extremely powerful in classifying the T = 0 and T = ∞ phases of the Ising gauge model. In the application of unsupervised learning, we illustrate that a deep auto-encoder constructed from stacked restricted Boltzmann machines (RBMs) is closely related to the renormalization group (RG) method well understood in modern physics, and that our reconstruction of Ising spin configurations in the ferromagnetic Ising model is analogous to the reconstruction of hand-written digits.
Statistics
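To make the supervised-learning setup concrete, here is a minimal sketch (not the report's code) of phase classification from raw spin configurations. The lattice size, the 5% flip rate standing in for low-temperature Monte Carlo samples, and the network width are illustrative assumptions; an actual study would train on Monte Carlo configurations sampled across a range of temperatures.

```python
# Minimal sketch: a feed-forward network separating "ordered" (low-T) from
# "disordered" (high-T) Ising-like configurations. All parameters are toy values.
import torch
import torch.nn as nn

L = 16        # linear lattice size (assumption)
N = 2000      # configurations per class (assumption)

def toy_configs(ordered: bool, n: int) -> torch.Tensor:
    """Crude stand-ins for Monte Carlo samples of +/-1 spins."""
    if ordered:
        base = torch.ones(n, L * L)
        flips = (torch.rand(n, L * L) < 0.05).float()  # 5% thermal spin flips
        return base * (1 - 2 * flips)
    return torch.where(torch.rand(n, L * L) < 0.5, -1.0, 1.0)  # random spins

x = torch.cat([toy_configs(True, N), toy_configs(False, N)])
y = torch.cat([torch.zeros(N, dtype=torch.long), torch.ones(N, dtype=torch.long)])

model = nn.Sequential(nn.Linear(L * L, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):          # full-batch training, enough for a toy task
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print("final training loss:", loss.item())
```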
Evolution of binary stars and its implications for evolutionary population synthesis
Most stars are members of binaries, and the evolution of a star in a close
binary system differs from that of an isolated star due to the proximity of its
companion star. The components in a binary system interact in many ways and
binary evolution leads to the formation of many peculiar stars, including blue
stragglers and hot subdwarfs. We will discuss binary evolution and the
formation of blue stragglers and hot subdwarfs, and show that those hot objects
are important in the study of evolutionary population synthesis (EPS), and
conclude that binary interactions should be included in the study of EPS.
Indeed, binary interactions make a stellar population appear younger (hotter), and the
far-ultraviolet (UV) excess in elliptical galaxies is shown to most likely
result from binary interactions. This has major implications for
understanding the evolution of the far-UV excess and elliptical galaxies in
general. In particular, it implies that the far-UV excess is not a sign of age,
as had been postulated previously, and predicts that it should not be strongly
dependent on the metallicity of the population but should exist universally, from
dwarf ellipticals to giant ellipticals.
Comment: Oral talk at IAUS 262, Brazil
Light Hadron Spectroscopy and Decay at BESIII
Light hadron spectroscopy plays an important role in understanding the decay
dynamics of unconventional hadronic states, such as strangeonium and glueballs.
BESIII provides an ideal avenue to search for these exotic states, thanks to the
large data samples recorded at various energy points in the tau-charm mass
region, including the J/psi resonance. This report summarizes recent results from the
BESIII experiment related to glueballs and strangeonium-like states.
Comment: 6 pages, 5 figures, Conference proceeding of FPCP-201
Deterministic and probabilistic analyses of offshore pile systems
The capacity of an offshore pile system and the biases of pile capacity models are important aspects in the assessment of existing offshore platforms and in the performance reliability achieved by the state of practice. The objectives of this research are to improve understanding of pile system behavior, to calibrate the pile system capacity model bias factors, and to evaluate the reliabilities of offshore pile systems.
A simplified single-pile failure surface in terms of three-dimensional pile head loads is proposed based on analytical lower- and upper-bound solutions, and is verified through finite element analyses. Numerical lower- and upper-bound models are then proposed for the ultimate capacity of a pile system, and are shown to be efficient and effective in accounting for global torsion and out-of-plane failures. The evidence from the survival of offshore platforms indicates that (1) well conductors should be included in assessing the pile system ultimate capacity; (2) static p-y curves should be used, which increases the pile system lateral capacity by 10 to 20%; (3) the mean value of the steel yield strength should be used; (4) jacket leg stubs should be included; and (5) site-specific geotechnical information is important.
The model bias factors in the API load and resistance design recipe are calibrated through Bayes' Theorem based on the predicted and observed performance of eighteen offshore platforms in recent Gulf of Mexico hurricanes. The API load and resistance design recipe is calibrated to be close to unbiased for predicting jacket system performance, slightly conservative for predicting a foundation overturning failure in clay, and conservative for predicting a lateral failure in clay and a foundation overturning failure in sand.
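The abstract does not spell out the calibration equation; as a hedged illustration, with a bias factor defined as the ratio of measured to predicted capacity, the Bayesian update over the observed platform performance takes the standard form:

```latex
% Hedged sketch: B is a capacity model bias factor (measured / predicted
% capacity); "perf" denotes the observed hurricane performance of the
% eighteen platforms. The calibration is the standard Bayesian update:
\[
  P(B \mid \text{perf}) \;=\;
  \frac{P(\text{perf} \mid B)\, P(B)}
       {\int P(\text{perf} \mid b)\, P(b)\, \mathrm{d}b}
\]
```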
The reliability of a pile system is shown to be insensitive to water depths and locations in the Gulf of Mexico, but depends on the pile layout, number of piles, loading direction, and expected failure mode. The pile system redundancy (a measure of capacity beyond failure of the first element) and robustness (a measure of capacity when the system is damaged) depend on the failure mode, pile geometry and layout, and loading directions. In general, the 8-leg pile system is more redundant and more robust than the 3-leg and 4-leg pile systems. The complexity (a measure of how well the most critically-loaded element represents all elements) depends on the pile layout, the expected failure mode of a single pile, and the pile capacity uncertainty. The complexity is generally small, indicating that the failure probability of the most critically-loaded pile is representative of the failure probabilities for all piles.
Curriculum and Instruction
Using and saving randomness
Randomness is ubiquitous and exceedingly useful in computer science. For example, in sparse recovery, randomized algorithms are more efficient and robust than their deterministic counterparts. At the same time, because random sources from the real world are often biased and defective, with limited entropy, high-quality randomness is a precious resource. This motivates the study of pseudorandomness and randomness extraction. In this thesis, we explore the role of randomness in these areas. Our research contributions broadly fall into two categories: learning structured signals and constructing pseudorandom objects.

Learning a structured signal. One common task in audio signal processing is to compress an interval of observation by finding the dominant k frequencies in its Fourier transform. We study the problem of learning a Fourier-sparse signal from noisy samples, where [0, T] is the observation interval and the frequencies can be "off-grid". Previous methods for this problem required the gap between frequencies to be above 1/T, which is necessary to robustly identify individual frequencies. We show that this gap is not necessary to recover the signal as a whole: for arbitrary k-Fourier-sparse signals under ℓ∞-bounded noise, we provide a learning algorithm with a constant-factor growth of the noise and sample complexity polynomial in k and logarithmic in the bandwidth and signal-to-noise ratio. In addition, we introduce a general method to avoid a condition number depending on the signal family F and the distribution D of measurements in the sample complexity. In particular, for any linear family F with dimension d and any distribution D over the domain of F, we show that this method provides a robust learning algorithm with O(d log d) samples. Furthermore, we improve the sample complexity to O(d) via spectral sparsification (optimal up to a constant factor), which provides the best known result for a range of linear families such as low-degree multivariate polynomials. Next, we generalize this result to an active learning setting, where we get a large number of unlabeled points from an unknown distribution and choose a small subset to label. We design a learning algorithm optimizing both the number of unlabeled points and the number of labels.

Pseudorandomness. Next, we study hash families, which have simple forms in theory and efficient implementations in practice. The size of a hash family is crucial for many applications such as derandomization. In this thesis, we study the upper bound on the size of hash families needed to fulfill their applications in various problems. We first investigate the number of hash functions needed to constitute a randomness extractor, which is equivalent to the degree of the extractor. We present a general probabilistic method that reduces the degree of any given strong extractor to almost optimal, at least when outputting few bits. For various almost-universal hash families, including Toeplitz matrices, Linear Congruential Hash, and Multiplicative Universal Hash, this approach significantly improves the upper bound on the degree of strong extractors in these hash families. Then we consider explicit hash families and multiple-choice schemes in the classical problem of placing balls into bins. We construct explicit hash families of almost-polynomial size that derandomize two classical multiple-choice schemes, matching the maximum loads of a perfectly random hash function.
Computer Science
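As a concrete aside on the Toeplitz family mentioned above: a seed of n + m - 1 random bits defines an m × n Toeplitz matrix over GF(2), and hashing is matrix-vector multiplication mod 2. This family is universal, which is what makes it usable inside extractor constructions (e.g., via the leftover hash lemma). A minimal sketch, with sizes chosen arbitrarily:

```python
# Minimal sketch of the Toeplitz hash family: T[i, j] depends only on the
# diagonal index i - j, so n + m - 1 seed bits determine the whole m x n matrix.
import numpy as np

def toeplitz_hash(seed: np.ndarray, x: np.ndarray, m: int) -> np.ndarray:
    """Hash the n-bit input x to m bits: y = T x over GF(2)."""
    n = len(x)
    assert len(seed) == n + m - 1
    T = np.array([[seed[i - j + n - 1] for j in range(n)] for i in range(m)],
                 dtype=np.int64)
    return (T @ x.astype(np.int64)) % 2   # matrix-vector product mod 2

rng = np.random.default_rng(0)
n, m = 16, 4                              # illustrative sizes
seed = rng.integers(0, 2, size=n + m - 1)
x = rng.integers(0, 2, size=n)
print(toeplitz_hash(seed, x, m))          # a 4-bit hash of the 16-bit input
```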
Resource allocation in service and logistics systems
Resource allocation is a problem commonly encountered in strategic planning, where a typical objective is to minimize the associated cost or maximize the resulting profit. It is studied analytically and numerically for service and logistics systems in this dissertation, with the major resources being people, services, or trucks.

First, a staffing-level problem is analyzed for large-scale single-station queueing systems. The system manager operates an Erlang-C queueing system with a quality-of-service (QoS) constraint on the probability that a customer is queued. However, in this model the arrival rate is uncertain, in the sense that even the arrival-rate distribution is not completely known to the manager. Rather, the manager has an estimate of the support of the arrival-rate distribution and its mean. The goal is to determine the number of servers needed to satisfy the QoS constraint. Two models are explored. In the first, the constraint is enforced on an overall delay probability, given the probabilities with which different feasible arrival-rate distributions are selected. In the second, the constraint has to be satisfied by every possible distribution. For both problems, asymptotically optimal solutions are developed based on Halfin-Whitt type scalings. The work is followed by a discussion of solution uniqueness with a joint QoS constraint and a given arrival-rate distribution in multi-station systems.

Second, an extension of Naor's analysis of the joining-or-balking problem in observable M/M/1 queues, and its variant in unobservable M/M/1 queues, is presented to incorporate parameter uncertainty. The arrival-rate distribution is known to all, but the exact arrival rate is unknown in both cases. The optimal joining strategies are obtained and compared from the perspectives of individual customers, the social optimizer, and the profit maximizer, where differences are recognized between the results for systems with deterministic and stochastic arrival rates.

Finally, an integrated ordering and inbound shipping problem is formulated for an assembly plant with a large number of suppliers. The objective is to minimize the annual total cost under a static strategy. Potential transportation modes include full-truckload shipping, which allows customized routing, and less-than-truckload shipping, which does not. A location-based model is applied in search of near-optimal solutions instead of an exact model with vehicle routing, and numerical experiments are conducted to derive insights into the problem.
Operations Research and Industrial Engineering
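For a known arrival rate, the QoS constraint in the first problem reduces to the classical Erlang-C computation: find the smallest number of servers c for which the delay probability stays below a target alpha. The dissertation's contribution is the uncertain-arrival-rate case, which this deterministic sketch does not capture; the parameter values below are illustrative only.

```python
# Minimal Erlang-C staffing sketch with a known arrival rate lam and service
# rate mu; offered load a = lam / mu. Stability requires c > a.
from math import factorial

def erlang_c(c: int, a: float) -> float:
    """Probability that an arriving customer must wait (Erlang-C formula)."""
    if c <= a:
        return 1.0  # unstable regime: every customer waits
    top = (a ** c / factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def staffing_level(lam: float, mu: float, alpha: float) -> int:
    """Smallest number of servers keeping the delay probability at or below alpha."""
    c = int(lam / mu) + 1
    while erlang_c(c, lam / mu) > alpha:
        c += 1
    return c

print(staffing_level(lam=100.0, mu=1.0, alpha=0.1))  # illustrative numbers
```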
Symmetric Versus Nonsymmetric Structure of the Phosphorus Vacancy on InP(110)
The atomic and electronic structure of positively charged P vacancies on
InP(110) surfaces is determined by combining scanning tunneling microscopy,
photoelectron spectroscopy, and density-functional theory calculations. The
vacancy exhibits a nonsymmetric rebonded atomic configuration with a charge
transfer level 0.75 ± 0.1 eV above the valence band maximum. The scanning
tunneling microscopy (STM) images show only a time average of two degenerate
geometries, due to a thermal flip motion between the mirror configurations.
This leads to an apparently symmetric STM image, although the ground state
atomic structure is nonsymmetric.
Comment: 5 pages including 3 figures. Related publications can be found at
http://www.fhi-berlin.mpg.de/th/paper.htm
The study of roadway sustainability in Texas: a case study with the use of the Greenroads rating system
In the state of Texas, the roadway network consists of approximately 313,228 miles of roads (Federal Highway Administration, 2013), accounting for 7.61% of the public roads in the United States. To put this in perspective, it is equal to 12.6 times the circumference of the Earth. To manage this network, state and local transportation agencies use millions of tons of natural resources to construct and maintain these facilities. If these resources are not used properly, Texas may end up wasting them, producing more pollutants, and threatening its natural environment. Moreover, there is currently no way to quantify and record the efforts made by the Texas transportation community toward becoming sustainable. Thus, there is a need to promote and keep track of ongoing sustainability efforts. In this study, we explore the trend of roadway sustainability in Texas and propose a Texas version of a sustainability rating system based on Greenroads. The Greenroads sustainability rating system is a third-party rating system developed by the University of Washington and aimed at recognizing sustainable practices in roadway projects. First, two of its projects in Texas are selected as case studies for the purpose of understanding the system. Second, 1,594 pavement projects are extracted from the Texas highway construction database, Site Manager, maintained by the Texas Department of Transportation (TxDOT), to understand the state of practice. Third, material data from TxDOT division engineers are included as well. Drawing on these sources, a Greenroads-based sustainability rating system, adapted especially in terms of material selection and pavement technology, is proposed. The implementation of this system is expected to spark further pursuit of roadway sustainability in Texas.
Civil, Architectural, and Environmental Engineering
The development of bias in perceptual and financial decision-making
Decisions are prone to bias. This can be seen in daily choices: for instance, when markets are plunging, investors tend to sell stocks instead of purchasing them at lower prices, because people in general are more sensitive to potential losses than to potential gains, i.e., loss averse, in making financial choices. It can also be seen in laboratory tests: when participants receive higher payoffs for discriminating a visual stimulus as one choice over the other, they begin choosing the higher-rewarded option more often, even when the objective evidence indicates the alternative. In my dissertation, I used mathematical models and functional magnetic resonance imaging (fMRI) to track the development of bias in perceptual and financial decision-making, and present evidence characterizing an experience-sensitive and domain-general decision-making process in the human brain.

The first chapter shows that bias can develop through associating decision contexts with reward feedback from trial to trial in perceptual decision-making. Although the surface task differed, this learning process involved the same prediction-error-driven mechanisms, implemented in the dopaminergic system, as in financial decision-making. Furthermore, the frontal cortex increased its strength of connection between visual and value systems, which accounted for the growth of perceptual bias.

The second chapter extends this feedback-driven acquisition process to examine the influence of experience on loss aversion in financial decision-making. The results show that people learned to make riskier or more conservative decisions according to the feedback they had received in different decision contexts. This alteration in loss aversion was achieved through modulation of the value system's sensitivity toward potential gains during evaluation, with the frontal cortex mediating the change.

The third chapter uses a mathematical model to identify changes in financial decision-making that occur faster than the temporal resolution of fMRI. The results suggest that people may simplify financial information into rules of thumb for making a choice. These findings not only integrate knowledge across different domains of decision neuroscience but also shed light on how one may refine the decision-making process through experience.
Psychology
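As a hedged illustration of the prediction-error-driven mechanism invoked in the first chapter, the sketch below implements a Rescorla-Wagner-style value update in which reward feedback gradually biases choice toward the higher-rewarded option. The payoffs, learning rate, and exploration rate are illustrative assumptions, not the dissertation's parameters.

```python
# Minimal sketch: prediction-error-driven acquisition of a choice bias.
# The value of the higher-rewarded option grows with each reward prediction
# error, shifting choices toward it over trials.
import random

alpha = 0.1                      # learning rate (assumption)
values = {"A": 0.0, "B": 0.0}    # learned values of the two options
payoff = {"A": 2.0, "B": 1.0}    # option A is the higher-rewarded choice

random.seed(0)
for trial in range(200):
    # greedy choice with occasional random exploration
    if random.random() < 0.1:
        choice = random.choice(["A", "B"])
    else:
        choice = max(values, key=values.get)
    reward = payoff[choice]
    delta = reward - values[choice]      # reward prediction error
    values[choice] += alpha * delta      # dopaminergic-style value update

print(values)  # value of A ends higher: an acquired decision bias
```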